
    The benefits of synchronous collaborative information visualization: evidence from an experimental evaluation

    A large body of studies reports empirical evidence of how information visualization supports the comprehension and analysis of data. The benefits of visualization for synchronous group knowledge work, however, have not been addressed extensively. Anecdotal evidence and use cases illustrate the benefits of synchronous collaborative information visualization, but very few empirical studies have rigorously examined the impact of visualization on group knowledge work. We have consequently designed and conducted an experiment in which we analyzed the impact of visualization on knowledge sharing in situated work groups. Our experimental study consists of evaluating the performance of 131 subjects (all experienced managers) in groups of 5 (for a total of 26 groups), working together on a real-life knowledge sharing task. We compare (1) the control condition (no visualization provided) with two visualization supports: (2) optimal and (3) suboptimal visualization (based on a previous survey). The facilitator of each group was asked to populate the provided interactive visual template with insights from the group, and to organize the contributions according to the group consensus. We evaluated the results through both objective and subjective measures. Our statistical analysis clearly shows that interactive visualization has a statistically significant, objective and positive impact on the outcomes of knowledge sharing, but that the subjects seem not to be aware of this. In particular, groups supported by visualization achieved higher productivity, higher quality of outcome and greater knowledge gains. No statistically significant difference was found between the optimal and the suboptimal visualization, though (as classified by the pre-experiment survey). Subjects also did not seem to be aware of the benefits that the visualizations provided, as no difference between the visualization and the control conditions was found for the self-reported measures of satisfaction and participation. An implication of our study for information visualization applications is to extend them with real-time group annotation functionalities that aid in the group sense-making process of the represented data.
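
    As an illustration of the kind of between-condition comparison reported above, the following minimal sketch (in Python, with hypothetical group scores rather than the study's data) runs a one-way ANOVA over an objective outcome such as group productivity across the three conditions.

        # Minimal sketch, not the authors' analysis code: the scores are made-up
        # placeholders for an objective outcome (e.g. group productivity).
        from scipy.stats import f_oneway

        control = [12, 15, 11, 14, 13, 12, 16, 13, 14]      # no visualization
        optimal = [18, 20, 17, 19, 21, 18, 20, 19, 17]      # optimal visualization
        suboptimal = [17, 19, 18, 20, 18, 17, 19, 18, 20]   # suboptimal visualization

        f_stat, p_value = f_oneway(control, optimal, suboptimal)
        print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # small p suggests a condition effect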

    A Structured Approach for Designing Collaboration Experiences for Virtual Worlds

    While 3D virtual worlds are increasingly being used as interactive environments for collaboration, there is still no structured approach developed specifically for the combined design of 3D virtual environments and the collaborative activities within them. We argue that formalizing both the structural elements of virtual worlds and aspects of collaborative work or collaborative learning helps to develop fruitful collaborative work and learning experiences. We therefore present the avatar-based collaboration framework (ABC framework). Grounded in semiotics theory, the framework places the collaborating groups at the center of the design and emphasizes the use of distinct features of 3D virtual worlds in collaborative learning environments and activities. In developing the framework, we have drawn from best practices in instructional design and game design, research in HCI, and findings and observations from our own empirical research investigating collaboration patterns in virtual worlds. Along with the framework, we present a case study of its first application in a global collaborative learning project. This paper particularly addresses virtual world designers, educators, meeting facilitators, and other practitioners by thoroughly describing the process of creating rich collaboration and collaborative learning experiences for virtual worlds with the ABC framework.

    The NEST software development infrastructure

    Software development in the computational sciences has reached a critical level of complexity in recent years. This "complexity bottleneck" occurs both for the programming languages and technologies that are used during development and for the infrastructure that is needed to sustain the development of large-scale software projects and keep the code base manageable [1]. As development shifts from specialized, solution-tailored in-house code (often written by a single developer or only a few) towards more general software packages written by larger teams of programmers, it becomes inevitable to use professional software engineering tools in the realm of scientific software development as well. In addition, the move to collaboration-based large-scale projects (e.g. BrainScaleS) also means a larger user base, which depends and relies on the quality and correctness of the code. In this contribution, we present the tools and infrastructure that have been introduced over the years to support the development of NEST, a simulator for large networks of spiking neurons [2]. In particular, we show our use of:
    • version control systems
    • bug tracking software
    • web-based wiki and blog engines
    • frameworks for carrying out unit tests
    • systems for continuous integration

    References:
    [1] Gregory Wilson (2006). Where's the Real Bottleneck in Scientific Computing? American Scientist, 94(1): 5-6, doi:10.1511/2006.1.5.
    [2] Marc-Oliver Gewaltig and Markus Diesmann (2007). NEST (Neural Simulation Tool), Scholarpedia, 2(4): 1430.
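
    As a concrete illustration of the unit-testing and continuous-integration practices listed above, the sketch below shows a minimal regression test in Python; run_simulation is a hypothetical stand-in for a simulator entry point, not part of the NEST API.

        # Minimal sketch of a regression test that a CI system could run on
        # every commit; run_simulation is a hypothetical placeholder.
        import unittest


        def run_simulation(num_neurons, sim_time_ms):
            """Hypothetical stand-in: returns the number of spikes recorded."""
            return int(0.01 * num_neurons * sim_time_ms)


        class RegressionTest(unittest.TestCase):
            def test_spike_count_is_reproducible(self):
                # Identical inputs must yield identical results, so that
                # continuous integration can flag behavioural changes.
                first = run_simulation(num_neurons=100, sim_time_ms=1000.0)
                second = run_simulation(num_neurons=100, sim_time_ms=1000.0)
                self.assertEqual(first, second)


        if __name__ == "__main__":
            unittest.main()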

    Automatic generation of software interfaces for supporting decision-making processes. An application of domain engineering and machine learning

    Information dashboards are sophisticated tools. Although they enable users to reach useful insights and support their decision-making challenges, a good design process is essential to obtain powerful tools. Users need to be part of these design processes, as they will be the consumers of the information displayed. But users are very diverse and can have different goals, beliefs, preferences, etc., and creating a new dashboard for each potential user is not viable. Several tools exist that allow users to configure their displays without requiring programming skills. However, users might not know exactly what they want to visualize or explore, which also makes the configuration process a tedious task. This research project aims to explore the automatic generation of user interfaces for supporting these decision-making processes. To tackle these challenges, a domain engineering and machine learning approach is taken. The main goal is to automate the design process of dashboards by learning from the context, including the end users and the target data to be displayed.
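
    A minimal sketch of the kind of learning step involved (an assumption for illustration, not the project's implementation): a classifier maps simple context features of the user and the target data to a recommended visualization type, which could then seed a generated dashboard configuration.

        # Hypothetical context features: [is_temporal, is_categorical, num_measures]
        from sklearn.tree import DecisionTreeClassifier

        contexts = [
            [1, 0, 1],  # temporal data, one measure
            [0, 1, 1],  # categorical data, one measure
            [0, 0, 2],  # two numeric measures
            [1, 0, 2],
            [0, 1, 2],
        ]
        chart_types = ["line", "bar", "scatter", "line", "bar"]  # made-up labels

        model = DecisionTreeClassifier().fit(contexts, chart_types)
        print(model.predict([[1, 0, 1]]))  # e.g. ['line'] for a temporal, single-measure context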

    Meeting the Memory Challenges of Brain-Scale Network Simulation

    The development of high-performance simulation software is crucial for studying the brain connectome. Using connectome data to generate neurocomputational models requires software capable of coping with models on a variety of scales: from the microscale, investigating plasticity and dynamics of circuits in local networks, to the macroscale, investigating the interactions between distinct brain regions. Prior to any serious dynamical investigation, the first task of network simulations is to check the consistency of data integrated in the connectome and constrain ranges for yet unknown parameters. Thanks to distributed computing techniques, it is possible today to routinely simulate local cortical networks of around 10^5 neurons with up to 10^9 synapses on clusters and multi-processor shared-memory machines. However, brain-scale networks are orders of magnitude larger than such local networks, in terms of numbers of neurons and synapses as well as in terms of computational load. Such networks have been investigated in individual studies, but the underlying simulation technologies have neither been described in sufficient detail to be reproducible nor made publicly available. Here, we show that as network model sizes approach the regime of meso- and macroscale simulations, memory consumption on individual compute nodes becomes a critical bottleneck. This is especially relevant on modern supercomputers such as the Blue Gene/P architecture, where the available working memory per CPU core is rather limited. We develop a simple linear model to analyze the memory consumption of the constituent components of neuronal simulators as a function of network size and the number of cores used. This approach has multiple benefits. The model enables identification of the key components contributing to memory saturation and prediction of the effects of potential code improvements before any implementation takes place. As a consequence, development cycles can be shorter and less expensive. Applying the model to our freely available Neural Simulation Tool (NEST), we identify the software components dominant at different scales, and develop general strategies for reducing the memory consumption, in particular by using data structures that exploit the sparseness of the local representation of the network. We show that these adaptations enable our simulation software to scale up to the order of 10,000 processors and beyond. As memory consumption issues are likely to be relevant for any software dealing with complex connectome data on such architectures, our approach and our findings should be useful for researchers developing novel neuroinformatics solutions to the challenges posed by the connectome project.
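
    The linear memory model described above can be sketched as follows (illustrative coefficients only, not measurements from NEST): per-node memory is a fixed base cost plus contributions from the neurons and synapses stored locally on each participating compute node.

        # Minimal sketch of a linear per-node memory model; the coefficients
        # are made-up illustration values, not NEST measurements.
        def memory_per_node_mb(n_neurons, n_synapses, n_nodes,
                               base_mb=200.0, mb_per_neuron=1e-3, mb_per_synapse=3e-5):
            """Estimate memory (MB) on one compute node when the network is
            distributed evenly over n_nodes."""
            local_neurons = n_neurons / n_nodes
            local_synapses = n_synapses / n_nodes
            return base_mb + mb_per_neuron * local_neurons + mb_per_synapse * local_synapses

        # Example: 10^5 neurons and 10^9 synapses distributed over 1024 nodes.
        print(f"{memory_per_node_mb(1e5, 1e9, 1024):.1f} MB per node")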

    Identity ambiguity and the promises and practices of hybrid e-HRM project teams

    The role of IS project team identity work in the enactment of day-to-day relationships with internal clients is under-researched. We address this gap by examining the identity work undertaken by an electronic human resource management (e-HRM) 'hybrid' project team engaged in an enterprise-wide IS implementation for their multi-national organisation. Utilising social identity theory, we identify three distinctive, interrelated dimensions of project team identity work: project team management, team 'value propositions' (promises), and the team's 'knowledge practice'. We reveal how dissonance between two perspectives of e-HRM project identity work (clients' expected norms of the project team's service and the project team's expected norms of themselves) results in identity ambiguity. Our research contributes to identity studies in the IS project management, HR and hybrid literatures, and to managerial practice, by challenging the assumption that hybrid experts are the panacea for problems associated with IS projects.

    Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure

    Simulating the brain-body-environment trinity in closed loop is an attractive proposal for investigating how perception, motor activity and interactions with the environment shape brain activity, and vice versa. The relevance of this embodied approach, however, hinges entirely on the modeled complexity of the various simulated phenomena. In this article, we introduce a software framework that is capable of simulating large-scale, biologically realistic networks of spiking neurons embodied in a biomechanically accurate musculoskeletal system that interacts with a physically realistic virtual environment. We deploy this framework on the high-performance computing resources of the EBRAINS research infrastructure and investigate its scaling performance by distributing computation across an increasing number of interconnected compute nodes. Our architecture is based on requested compute nodes as well as persistent virtual machines; this provides a high-performance simulation environment that is accessible to multi-domain users without expert knowledge, with a view to enabling users to instantiate and control simulations at custom scale via a web-based graphical user interface. Our simulation environment, entirely open source, is based on the Neurorobotics Platform developed in the context of the Human Brain Project, and on the NEST simulator. We characterize the capabilities of our parallelized architecture for large-scale embodied brain simulations through two benchmark experiments, investigating the effects of scaling compute resources on performance defined in terms of experiment runtime, brain instantiation time and simulation time. The first benchmark is based on a large-scale balanced network, while the second is a multi-region embodied brain simulation consisting of more than a million neurons and a billion synapses. Both benchmarks clearly show how scaling compute resources improves the aforementioned performance metrics in a near-linear fashion. The second benchmark in particular is indicative of both the potential and the limitations of a highly distributed simulation in terms of a trade-off between computation speed and resource cost. Our simulation architecture is being prepared to be made accessible to everyone as an EBRAINS service, thereby offering a community-wide tool with a unique workflow that should provide momentum to the investigation of closed-loop embodiment within the computational neuroscience community.
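
    The near-linear scaling behaviour mentioned above can be quantified with a small sketch (hypothetical runtimes, not the reported benchmark results) that computes speedup and parallel efficiency from experiment runtimes measured at increasing node counts.

        # Hypothetical runtimes; speedup = T(1 node) / T(n nodes),
        # efficiency = speedup / n (1.0 would be perfectly linear scaling).
        node_counts = [1, 2, 4, 8, 16]
        runtimes_s = [6400.0, 3300.0, 1700.0, 900.0, 500.0]

        baseline = runtimes_s[0]
        for nodes, runtime in zip(node_counts, runtimes_s):
            speedup = baseline / runtime
            efficiency = speedup / nodes
            print(f"{nodes:3d} nodes: speedup {speedup:5.2f}, efficiency {efficiency:.2f}")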

    Spectrum standardization for laser-induced breakdown spectroscopy measurements

    This paper presents a spectrum normalization method for laser-induced breakdown spectroscopy (LIBS) measurements that converts the recorded characteristic line intensity under varying conditions to the intensity under a standard condition with a standard plasma temperature, degree of ionization, and total number density of the species of interest, in order to reduce measurement uncertainty. The characteristic line intensities of the species of interest are first converted, for each laser pulse analysis, to the intensity at a fixed temperature and standard degree of ionization but varying total number density. In this state, if the influence of the variation of plasma morphology is neglected, the sum of multiple spectral line intensities for the measured element can be regarded as proportional to the total number density of that element, and the fluctuation of the total number density, or the variation of ablation mass, is compensated for by applying this relationship. In experiments with 29 brass alloy samples, applying this method to determine the Cu concentration shows a significant improvement in measurement precision and accuracy over generally applied normalization methods. The average RSD value, average value of the error bar, R^2, RMSEP, and average value of the maximum relative error were 5.29%, 0.68%, 0.98, 2.72%, and 16.97%, respectively, while the corresponding values for normalization with the whole spectrum area were 8.61%, 1.37%, 0.95, 3.28%, and 29.19%, respectively.
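
    A minimal sketch of the temperature-conversion step only (assuming LTE, neglecting the partition-function ratio and the degree-of-ionization and total-number-density corrections described above; all values are illustrative): a measured line intensity is rescaled by the ratio of the Boltzmann factors of the upper level at the standard and the measured plasma temperatures.

        import math

        K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

        def to_standard_temperature(intensity, e_upper_ev, t_measured_k, t_standard_k):
            """Rescale a measured line intensity to the value expected at the
            standard plasma temperature via the upper-level Boltzmann factor."""
            boltzmann_measured = math.exp(-e_upper_ev / (K_B_EV * t_measured_k))
            boltzmann_standard = math.exp(-e_upper_ev / (K_B_EV * t_standard_k))
            return intensity * boltzmann_standard / boltzmann_measured

        # Illustrative example: a line with a 3.82 eV upper level, measured at
        # 11,500 K, converted to a 10,000 K standard condition.
        print(to_standard_temperature(1.0e4, 3.82, 11500.0, 10000.0))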